SYSTEM AND METHOD FOR DIGITAL MAKE-UP MIRROR
Patent abstract:
A computer-implemented method emulates a mirror using a camera video stream and a display screen to generate a digital mirror. The digital mirror is particularly configured for very close-up applications, such as makeup and eyeglass fitting sessions, and provision is made for correcting the appearance of the face on the screen. Specific implementations allow the tracking of facial movement or of specific facial features, and the application of virtual makeup, virtual glasses, other accessories or other filters to the face. In addition, session recording and automatic editing allow the user to easily access the tutorial session and the products used during the session. The products can be ordered from the user's mobile device at any time.
Publication number: FR3053502A1
Application number: FR1753100
Filing date: 2017-04-10
Publication date: 2018-01-05
Inventors: Nissi Vilcovsky; Ofer Saban
Applicant: EyesMatch Ltd, Great Britain
Patent description:
Field
The present invention relates to digital mirrors and, more specifically, to digital mirrors which are specifically configured for very close-up use, for example for makeup sessions and eyewear fitting sessions.
Related art
The classic mirror (i.e., a reflective surface) is the common and most reliable tool for an individual to explore his or her real appearance in real time. Some variants have been proposed around the combination of a camera and a screen to replace the conventional mirror. However, these techniques are not convincing and are not yet accepted as a reliable image of the individual, as if he were looking at himself in a conventional mirror. This is mainly because the image generated by a camera is very different from an image generated by a mirror. The applicants have previously presented original technologies for converting and transforming a still image or a 2D or 3D video created by one or more cameras, with or without other sensors, into a mirror or video conference experience. Examples of the applicants' embodiments are described, for example, in U.S. Pat. Nos. 948 481 and 8 982 109. The embodiments presented therein can be implemented for any general use of a mirror. The applicant continued with other presentations concerning the adaptation of the mirror to specific needs, such as, for example, clothing stores. Examples of the applicants' embodiments are described, for example, in U.S. Pat. Nos. 976 160 and 8 982 110.
In many department stores and beauty stores, demonstration makeup sessions are carried out for customers. The goal is that, if the customer likes the result, the customer will purchase some of the items used during the demonstration. However, once the session is over and the customer has left the store, the customer may no longer remember which products were used or how to apply them. In addition, the client may sometimes wish to try several different products, for example to compare different colors of lipstick, but would not like to apply and remove different makeup products successively.
ABSTRACT
The following summary of the disclosure is included to provide a basic understanding of certain aspects and features of the invention. This summary is not an extensive overview of the invention and, as such, it is not intended to specifically identify key or critical elements of the invention or to define the scope of the invention. Its sole purpose is to present certain concepts of the invention in a simplified form as a prelude to the more detailed description which is presented below.
The embodiments presented include a transformation module which transforms the video stream received from the camera and generates a transformed stream which, when projected on the display screen, causes the image to appear as a mirror image. As can be experienced with devices having cameras mounted above the screen (for example, a video conference on a laptop), the image generated is not personal, since the user seems to be looking away from the camera. This is actually the case, because the user is looking directly at the screen, but the camera is positioned above the screen. Therefore, the transformation module transforms each frame (i.e., each image) so that it appears to have been taken by a camera positioned behind the screen; that is, the image appears as if the user were looking directly at a camera positioned behind the screen, even if the image is taken by a camera positioned above or next to the screen.
The translation module adjusts the presentation of the image on the screen, so that the face appears centered on the screen, regardless of the size of the user. An eye adjustment unit transforms the image of the user's eyes, so that the eyes appear centered and looking directly at the screen, just as when looking at a mirror. In addition, an augmented reality module allows the application of virtual makeup to the image of the user projected on the display screen.
According to the embodiments presented, a system is provided for capturing, storing and reorganizing a makeup session, whether real or virtual. A demonstration makeup session is performed using any embodiment of the digital mirrors described in this document. The demonstration makeup session is recorded for any desired length of time (for example, usually 5 to 20 minutes). This recorded session is stored in a storage device, for example, a local server or a cloud computing server. In some embodiments, the stored session is shared with the client, for example, by providing a link to the client, so that the client is able to review how the makeup artist applied the makeup.
According to one embodiment, the shared video is then edited by cutting it into sub-sessions. Sub-sessions may include, for example: foundation, powder, self-tanner, concealer, lipstick, lip gloss, mascara, eye shadow, blush, eyebrow shade and eyeliner. The start of each of these sub-sessions is marked in the stored session video, for example, with metadata, and icons are generated, each icon having a link to a corresponding mark. This marks each sub-session and allows the client to skip or jump to the particular sessions of interest. By "trimming" the video according to the sub-sessions, the client will be able to quickly browse specific sub-sessions and browse the history by sub-session. Sessions and sub-sessions can be reviewed using thumbnails, icons, etc.
Aspects of the invention include a digital makeup mirror comprising: a digital screen; a digital camera positioned to generate a video stream of a user's face; a controller coupled to the digital screen and the digital camera and preprogrammed to perform operations comprising: receiving the video stream from the camera; identifying facial features in the video stream; flipping each frame about a vertical axis to exchange the right and left sides of the image, thereby imitating a mirror image; transforming each frame to emulate an image of the user looking directly at the screen, as if there were a camera positioned directly behind the screen; framing each frame so as to display the user's face in the center of the digital screen; and displaying the video stream on the digital screen after the inversion, transformation and framing operations. The controller can further perform the operation comprising identifying facial features in each frame of the video stream so as to track the location of each of the facial features in each frame. Facial features include lips, eyes, eyebrows, chin and nose. The identification of facial features may include identifying the contours of the facial features. The identification of the facial features may include the identification of the pixels belonging to each facial feature.
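By way of illustration only, the following is a minimal sketch of the mirror loop described above (horizontal flip, face detection, and a translation that keeps the detected face centered), written in Python with OpenCV. The stock Haar cascade detector and the simple affine translation are assumptions standing in for the claimed facial-feature identification and framing operations, not the patent's own implementation; the perspective correction that emulates a camera behind the screen would be applied in addition to these steps.

```python
# Minimal sketch of the mirror loop: horizontal flip, face detection, and a
# translation that keeps the detected face centered in the displayed frame.
# Assumes a webcam mounted above the display; OpenCV's stock Haar cascade
# stands in for the patent's facial-feature identification.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mirror_frame(frame):
    frame = cv2.flip(frame, 1)                       # swap left/right like a mirror
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])   # largest face
        # translate so the face center lands at the screen center
        dx, dy = w / 2 - (x + fw / 2), h / 2 - (y + fh / 2)
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        frame = cv2.warpAffine(frame, M, (w, h))
    return frame

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("digital mirror", mirror_frame(frame))
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```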
The controller can also perform the operation comprising: displaying, on the digital screen, a palette of colors corresponding to a makeup category; allowing the user to designate a selection from the color palette; and digitally applying the selection to the facial feature corresponding to the makeup category. The digital application of the selection may include changing the attributes of the pixels belonging to the facial feature. The display of the color palette may include displaying a plurality of colors and a plurality of color attributes. Color attributes can include at least transparency and shine. The controller can also perform the operation of allowing the user to modify the attributes after the digital application of the selection. The digital application of the selection may further include tracking the location of each of the facial features in each frame and applying a change in the digital application of the selection in accordance with the movement of the facial feature in each frame. The digital application of the selection can be performed using a mask having the shape of a selected facial feature. The digital makeup mirror can also comprise a correspondence table associating the color palette with product data. The controller can also carry out the operation comprising displaying a product corresponding to the designated selection.
The controller can further perform the operation comprising: displaying, on the digital screen, a plurality of preprogrammed makeup looks; allowing the user to designate a selection from the makeup looks; and digitally applying the selection to the face of the user projected onto the digital screen. The controller can also carry out the operation comprising displaying, on the screen, images of the products used during the generation of the makeup look.
The digital makeup mirror may further include a lighting device comprising a plurality of light sources of a plurality of temperatures. The controller can further perform the operation comprising changing the lighting intensity of the plurality of light sources to generate a desired overall light temperature. The controller can also carry out the operation comprising: projecting, on the digital screen, a selection of lighting temperatures; allowing the user to designate a selection from the selection of lighting temperatures; and changing the lighting intensity of the plurality of light sources according to the designation of the user.
BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects and features of the invention will be apparent from the detailed description, which is given with reference to the accompanying drawings. It should be appreciated that the detailed description and the drawings provide various non-limiting examples of various embodiments of the invention, which is defined by the appended claims. The accompanying drawings, which are incorporated into and form part of this specification, illustrate embodiments of the present invention and, together with the description, serve to explain and illustrate the principles of the invention. The drawings are intended to illustrate the main features of the exemplary embodiments in a schematic manner. The drawings are not intended to represent every feature of the actual embodiments, nor the relative dimensions of the elements represented, and are not drawn to scale.
Figure 1 is a system block diagram for an augmented reality platform supporting real-time or recorded video/images according to one embodiment.
FIG. 2 represents an embodiment of an augmented reality module, which can correspond to the augmented reality module of FIG. 1.
FIG. 3 represents an embodiment of an augmented reality module which can change the appearance of a body part, or the color, orientation and texture of a facial feature or of an article or object in the foreground or background of the image, and which can be used in the makeup session presented in this document.
FIG. 4 shows an embodiment of calculation methods for creating a model for the change of color and texture and/or complexion, which can be referred to as Colograma.
FIG. 5 illustrates an example of the digital mirror according to an embodiment of the invention.
FIG. 6 illustrates a general processing flow executed by the digital makeup mirror to simulate a makeup session according to one embodiment.
FIG. 7 illustrates a flow diagram for performing a virtual makeup session according to one embodiment.
DETAILED DESCRIPTION
The following examples illustrate some embodiments and aspects of the invention. It will be obvious to those skilled in the art that various modifications, additions, substitutions and the like can be made without altering the spirit or scope of the invention, and these modifications and variations are encompassed within the scope of the invention as defined in the claims which follow. The following examples do not limit the invention in any way.
The embodiments of the invention involve both hardware and software designs which are particularly customized for use as a close-up mirror, i.e., in a situation in which the user observes the face, for example to apply makeup, to style hair, or to fit glasses. These situations require different considerations when virtualizing a mirror. Part of the difficulty stems from the fact that the display screen must be placed relatively close to the user's face, so that the camera is also very close, which generates distortions. That is, while the user is looking directly at the screen, the camera obtains an image from the top of the screen, so it appears that the user is not looking directly at the camera. This is an unnatural point of view for the user, who is used to looking at a mirror where the eyes seem to be looking directly at the mirror. In addition, there are also proximity distortions, in which the body parts closer to the camera appear larger than those further away. Finally, the placement of the user's face relative to the screen frame will differ depending on the size of the user. Lighting considerations are also critical when emulating a mirror that is close to the user's face. This is particularly important when the digital mirror is used to inspect makeup, in which case the appearance of the makeup colors should be as close as possible to their appearance in daylight.
Figure 1 is a system block diagram for an augmented reality platform supporting real-time or recorded video/images. The system may include one or a plurality (1:n) of input devices 101 including a video camera, a still camera, an infrared camera, a 2D camera or a 3D camera. The input device 101 may be designed to send information to one or more artificial vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109.
Said one or more artificial vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be designed to send information to one or a plurality (1:m) of display screens 106. Said one or more artificial vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be designed to send/receive information to/from an interface or user interface module 110. The interface 110 can be designed to send/receive information to/from one or more of a cloud, web/memory, or user device, for example, a smartphone or tablet. Note also that, in certain embodiments, the user interface is implemented in the form of a touch screen of the display screen 106.
Said one or more artificial vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 may include an image capture module 102, an eye adjustment transformation module 103, an augmented reality module 104, a video/photography recording module 105, a trigger event module 107, a control element module 108 and a factory calibration module 109.
The image capture module 102 may include one or more of the following features: enhancement filters, format conversion, video frame separation, image framing, image resizing, image stitching and the like. The image capture module 102 can be designed to send information to the eye adjustment transformation module 103. The image capture module 102 can be designed to send and/or receive information to/from the trigger event module 107.
The eye adjustment transformation module 103 can be designed to apply the correct mapping to the image in order to match the point of view of the camera with the theoretical point of view of the mirror (the reflection of the user's eyes), and to fill in the empty pixels if there are any after the mapping. The eye adjustment transformation module 103 also performs image translation to place the image of the face in the center of the frame of the display screen, regardless of the size or position of the user. Thus, the eye adjustment transformation module 103 can perform two distinct functions: an image transformation to modify the digital image captured by the input device 101 so as to imitate a mirror image on the display screen, and an image translation to center the image in the frame of the display screen. The image transformation may include centering the pupils in the user's eyes, so that it appears that the camera was placed directly behind the screen during image capture. The eye adjustment transformation module 103 can be designed to send information to the augmented reality module 104 and/or to the video/photography recording module 105. Furthermore, the eye adjustment transformation module 103 may be designed to send/receive information to/from the control element module 108. In addition, the eye adjustment transformation module 103 may be designed to send information to said screen or to the plurality of screens 106.
The augmented reality module 104 can be designed, for example, to perform virtual color and texture replacement, virtual wrapping, object insertion and the like. In the specific embodiments presented in this document, the augmented reality module 104 is configured to modify the color and the intensity of selected pixels so as to produce virtualized makeup. In other embodiments, the augmented reality module 104 is configured to overlay an image on the user's face, for example, to virtualize glasses on the user's face.
The augmented reality module 104 can be designed to send/receive information to/from the control element module 108 and/or to the video/photography recording module 105. Furthermore, the augmented reality module 104 may be designed to send information to the screen or to the plurality of screens 106.
The video and/or photography recording module 105 can be designed to record a single image or a short video based on a software command. The video/photography recording module 105 can be designed to send/receive information to/from the control element module 108. Furthermore, the video/photography recording module 105 can be designed to send information to the screen or to the plurality of screens 106.
The trigger event module 107 is optional and may include one or more of the following features: recognizing a user in front of the mirror by facial recognition, recognizing user gestures, recognizing an article, measuring distance, measuring/estimating user body attributes (including, for example, height, age, weight, ethnicity, gender, and the like), and calculating the user's theoretical point of view in a theoretical mirror. In embodiments relating to makeup sessions, the trigger event module can be configured to identify the user's skin color and complexion, and this information can be used to adjust the lighting. The trigger event module 107 can be designed to send/receive information to/from the control element module 108.
The control element module 108 may include one or more of the following features: control and management of the camera settings in order to optimize quality, management of the color adjustment and control of the lighting temperature and intensity, setting of other hardware, interfacing between the algorithm modules and higher level code/application/user interfaces, and pushing factory calibration data into elements of the algorithm. The control element module 108 can be designed to send/receive information to/from the factory calibration module 109.
The factory calibration module 109 can be designed to define the mapping transformation between the camera and the viewpoint of the user in front of the screen. Furthermore, the factory calibration module 109 can be designed to calibrate the image on the basis of distance, a specific location, the size of the user (translation), another geometric measurement of the screen, or any combination thereof.
FIG. 1 and the description which follows represent just examples of an embodiment of the present invention; other processing flows or functionalities can be allocated between the modules, representing additional embodiments which form part of the invention. The present inventors propose two methods for obtaining the augmented reality capabilities (in real time and offline). Both methods implement the augmented reality module 104 with real image or video data which is in real time or which was taken after processing via, for example, the eye adjustment transformation module 103. A feature is that a user can define, manually or automatically (via, for example, the interface 110), the items that the user would like to process and handle and the expected end result; for example, an automated rule can be something like locating a user's lips, which can then be changed to a different color using preprogrammed lipstick colors. Then, the selected object can be processed and extracted/segmented and saved in the database linked to the video or the original recorded image.
The augmented reality module 104 can then process the model/mask in real time at a given frame rate, which can be lower or higher than that of the original, and at the same size as or at a different size from that of the original. For example, once the lips are extracted, the appearance of the lips can be changed by appropriate coloring, creating the impression of lips of a desired shape. In one example, different lip shapes are stored beforehand, and the user can select a combination of lip shape and lipstick color, and the augmented reality module 104 will reproduce it on the image of the user in real time, allowing the user to see what this makeup will look like and to experiment with different shaping techniques and different colors and contours. Examples of the applications presented can include live augmented reality, i.e., the possibility of trying on different make-ups and/or glasses when the user would like to be seen with the modification (one or more options). Once the object extracted from the real scene has been saved, it is easier to reproduce multiple changes (color, texture, size and the like) by acting on the pixels identified as belonging to the extracted object. In addition, it is easier to carry out a longer process, with much more precision, with higher quality and using a process that produces more information, for example, user movement, body measurements, and a quality based on frame integration and the like. For video input, it is strongly recommended that the rendering process be performed in a DSP or GPU device to avoid introducing a delay into the video.
In the trigger event module 107, part of the trigger functionality can be fully automated; for example, a process can be started if face detection or presence detection is performed. The process can be a transformation and a translation of the video stream. Some of the triggers can be operated semi-automatically from the user interface module 110, which can include any way of controlling the computerized device. Part of the functionality of the trigger event is to calculate the transformation of the image based on geometric information, calibration, and/or real-time user tracking, for example, the location of the user, eyes, head, hands, position, movement and the like. Tracking can be done using one or more techniques such as background subtraction, pattern recognition, color segmentation, body part or other classifiers, and the like, which are based on image processing. The transformation tracking calculation functionality can also be implemented in the other modules. The tracking can also be a contribution to the transformation from other modules, such as tracking technologies which are not based on image recognition, for example, thermal heat, lasers, TOF (time of flight), inertial sensors, orientation sensors, a mobile device with GPS location and orientation, etc.
The control element module 108 can be designed to configure system commissioning, camera device authentication and the like, and can also provide information from the tracking transformation function to the real geometry transformation module or to the augmented reality module and the like.
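As a hedged illustration of the tracking mentioned above, the sketch below uses sparse Lucas-Kanade optical flow, one of several possible techniques (the text also lists background subtraction, pattern recognition and classifiers), to follow points found inside a previously segmented feature so that a stored mask can be shifted with the user's movement. The function names, parameters and the averaged-shift simplification are illustrative assumptions.

```python
# One possible per-frame tracking scheme (sparse optical flow); points found on
# the segmented feature in an earlier frame are followed so that a stored mask
# can be translated along with the user's movement.
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def init_tracking(gray, feature_mask):
    # pick trackable corners inside the segmented feature only
    return cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                   minDistance=5, mask=feature_mask)

def track(prev_gray, gray, prev_pts):
    # returns the surviving points and the average (dx, dy) motion of the feature
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)
    good_new = pts[status.flatten() == 1]
    good_old = prev_pts[status.flatten() == 1]
    if len(good_new) == 0:
        return prev_pts, np.zeros(2)
    shift = (good_new - good_old).reshape(-1, 2).mean(axis=0)
    return good_new.reshape(-1, 1, 2), shift
```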
With the factory calibration module 109, some of the information necessary to calculate the transformation to be applied to the image/video can be generated during factory calibration, or can be calculated based on additional information regarding the actual orientation of the camera on site, for example, height above the ground or desk and the like, 3D point of view, field of view (FOV) of the lens, and the like. The factory information and the actual installation geometry can be processed and delivered to the correct element in the system, which will use the information for better calibration and accuracy.
FIG. 2 represents an example of an augmented reality module, which can correspond to the augmented reality module 104 described above. Specifically, the augmented reality module can have the function of allowing a user to virtually apply makeup or try on glasses, hats, jewelry, etc. In this embodiment, the system obtains an input image or video from, for example, the computerized eye adjustment module 201, fed by a camera positioned above the display screen. In a generalized form, the input image or video can come from any image or video source, for example, a user's smartphone, a security camera, Google glasses, a mobile camera, a head-mounted display or a fixed camera. Additional embodiments may include additional geometric information which will assist in calculating proportions such as size, user gaze and the like. If the user's video or image comes from the eye adjustment module (calibrated image/video), a more detailed model can be created which allows very precise body measurements, object placement, size detection, orientation and the like. The additional information that can be calculated from the calibrated image or video can allow object adjustment, object replacement and insertion of new virtualized objects (glasses, jewelry, etc.) into the frame/video.
The selection module 202 can obtain selection information from the interface 206, either manually from the user (X, Y or name of the object) or automatically from a selection process, for example, a mechanism that can automatically detect facial features such as the lips, the cheeks, the eyes, the nose and the like. The module 203 can obtain the location and samples of the color of the feature (or the average color of the feature, which may consist of several colors). The module 203 can use this information to create a black and white mask, which is used first to generate a 2D or 3D shaded or colored textured mask. This information can then be used to apply virtualization to facial features, such as applying lipstick to the lips or blush to the cheeks. The extraction technique used by the module is based on a 3D color correlation, or any other technique such as the shortest Euclidean distance between the average color of the object and the color of the pixels, to separate the pixels of the object from the rest of the image. By this process, the pixels belonging to the feature are identified and can be labeled as belonging to the facial feature. Virtualization can then be applied by changing the color or intensity of the pixels belonging to the facial feature, and the shape can be changed virtually by adding and/or removing pixels belonging to the facial feature. For example, the application of lipstick can be carried out on a subset of the pixels belonging to the lips and, optionally, on the pixels belonging to the face around a particular area of the lips so as to visually modify the vermilion border of the lips, for example, by enhancing the appearance of the Cupid's bow.
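The color-correlation extraction just described can be sketched as follows. This is a simplified, assumption-laden example (fixed seed radius, fixed Euclidean threshold in the BGR space) of labeling the pixels belonging to a facial feature from a sampled color; the multi-level refinements listed next are omitted.

```python
# Minimal sketch of color-distance segmentation: sample the mean color at the
# user's selection, label every pixel whose Euclidean distance to that color is
# under a threshold, then clean the resulting mask with morphology.
import cv2
import numpy as np

def segment_by_color(frame_bgr, seed_xy, radius=4, threshold=40.0):
    img = frame_bgr.astype(np.float32)
    x, y = seed_xy
    seed = img[y - radius:y + radius, x - radius:x + radius].reshape(-1, 3)
    mean_color = seed.mean(axis=0)                        # average color of the selection
    dist = np.linalg.norm(img - mean_color, axis=2)       # per-pixel Euclidean distance
    mask = (dist < threshold).astype(np.uint8) * 255      # black and white mask
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask
```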
The decision as to whether or not a pixel is in the facial feature can be made at multiple levels, which are not limited to the following examples:
1. Color correlation, where a first decision can be based on a Euclidean distance threshold, the threshold being defined in the RGB color space or in the chromatic color space.
2. Noise filtering by applying morphological operators such as dilation and erosion, which can improve the decision about which pixels are "mislabelled" as part or not of the facial feature.
3. A decision based on information from previous or subsequent frames, or from neighboring pixels in a row or around the pixel. This step represents a major decision in the process.
4. The distance of the object from the original selection, which can be used as a threshold.
5. The extent of the surface of the object, whereby, if it is known that a facial feature is extended, or with reference to a generic shape, then part of the noise can be eliminated by filtering, for example, by fitting a selected lip shape to the image to eliminate noise.
6. The edges of the object, whereby the decision around the edges can be improved by edge detection, which can be performed by high-pass (HP) filters or other techniques. This can in particular be combined with the extent of the object surface to improve the fit.
7. A decision based on the energy of the colors. One of the problems with color separation is that a color in low light conditions can be seen as black, and the dynamic range of the decision is reduced. Dark/black pixels can be isolated, and other techniques can be applied to decide whether the dark/black pixels belong to the facial feature or not; for example, it can be determined whether the pixel is located within the border of the feature, or whether the distance of the energy from the color standard deviation (STD) of the feature changes.
8. A decision based on a classification technique that identifies a contour/landmark on the body element, with an additional curve algorithm applied to draw boundaries around it which ultimately represent the contours of the body element. A real-time classification can also be performed in a module and/or with a parallel technology, such as an optimization by a dedicated external classification chip (1 to 5 bits), or another AI technology which can significantly help the segmentation of the body element.
9. Use of prior information regarding the expected shape of the feature, for best results.
10. In case the facial feature is a combination of multiple colors or shapes, multiple color correlations and combinations can be used. In addition, any of the multilevel methods specified above can be used to obtain a higher level decision regarding the facial feature.
11. The decision may also be based on a majority, or on a decision relating to a neighboring pixel/frame used as a weighted factor in the decision. In case the image is treated as a vector, it may be easier to look at the neighbors in the same row or column, depending on how the image matrix is reshaped into a vector.
12. Estimating the skin tone of the article and the color standard deviation can also add important information for feature segmentation.
13. Any combination of one or more of the above steps.
When separating the facial features, masks can be created to allow virtualization. The mask can be used for rendering as a simple black and white mask.
However, in order to create a convincing impression of a virtualized feature or object, additional information from the texture or appearance of the feature or object may be maintained. In order to obtain this additional information, the mask can be applied to the original frame or video, and the RGB or grayscale tint or brightness scale over the object can be obtained. This information is much more precise and convincing for color changes, since it preserves the wrinkled texture, the shading, the reflection of light, the material signature and the like of the original object.
The model mask can be constructed in layers for improved handling. An example of a potential layered structure is as follows:
1. Black and white mask (to segment the feature or object). The black and white mask can be very important in distinguishing between the feature or object and the background, or between the object and another element around the object. Multiple techniques can be used to optimize the decision at the mask/object boundaries.
2. Object edge mask, representing the edge or outline of the feature or object.
3. Red mask, representing the red areas of the feature or object.
4. Green mask, representing the green areas of the feature or object.
5. Blue mask, representing the blue areas of the feature or object.
6. Texture mask that applies to all color masks, representing the texture appearance of the feature or object. For a facial feature, this mask can represent the complexion.
7. Shadow or brightness mask, representing the shaded or bright areas of the feature or object.
8. Material light reflection mask, representing the light reflection of the feature or object.
9. Light absorption mask, representing the light absorption zones of the feature or object.
10. Masks from other sensors, such as infrared, microwave, depth, ultrasound, ultra-wideband and the like.
11. Additional layers similar to those described above.
Once the mask model has the required information, in order to change the color or texture, the rendering module 204 can be used to modify the specific layer or layers and regenerate the object from the multiple layers, resulting in a reproduced video 205. For example, colored masks can be used with different intensity blends to reproduce different lipstick colors according to a pre-selected palette of available lipstick colors from a certain brand. Since all the other masks remain the same, the lips will be reproduced with all of the shading, brightness, reflection, texture, etc. of the lips, but with a different lipstick color, thus reproducing a very realistic lipstick on the lips. The effect of certain layers can be introduced by multiplying or by adding the modified layer to the frame. Subtraction and division can also define relationships between layers. Additional techniques that allow the handling of more complex items include an alignment technique, which can, based on a few points, stretch/transform an object or facial feature to insert it within the boundaries of the object or feature being handled. In one embodiment, the required change may be outside or within the boundaries of the original facial feature, and a modified mask for the new boundaries of the object may be created to replace the original mask model. In one embodiment, the required change can be obtained from a library of facial features.
Using an alignment technique, the library mask can be applied and adjusted to the user's facial feature to reproduce a change in appearance created by makeup. For example, eyebrow shaping can be virtualized on the user's image using a library of various eyebrow shapes, to show the user what eyebrow shaping would look like before the actual shaping is carried out. In one embodiment, the mask can be used as a pointer for virtual object alignment. In one embodiment, the mask can be used to align objects such as, for example, glasses, jewelry, a hat, etc. For example, the black and white mask that segments the facial features of the eyes can be used to fit the glasses.
In one embodiment, the stitching around the reference points of a body element can be carried out by linear interpolation between points, or by cubic interpolation, or by any generic polynomial interpolation, Chebyshev interpolation, Lagrange multipliers for interpolation, etc. The interpolation can be performed between two or more neighboring landmarks depending on the required curve and the stability of the interpolation. An interpolation error can be calculated by least squares compared to linear interpolation, or any other technique can be used to manage a stability problem during higher-order interpolation and eliminate oscillation. This seam interpolation can create a segmentation layer to assist in the segmentation of the body element, or to help align a shape model, or to align an avatar model and define parameters to adapt the shape or avatar to the number of landmarks in real time.
In one embodiment, the selection of a single object or of multiple (1:n) objects to be modeled can be obtained. From the video, a mask is created per frame, and a frame-by-frame 3D or partially 3D model can be created. From this frame-by-frame model, different perspectives can be obtained and used to create a 3D model that includes some or all of the user's movements. Later, this information can be used to create a more convincing virtual skin. That is, the present method can use the user's own movements to form the model.
In one embodiment, the rendering can be performed in the GPU, the CPU, a cloud GPU or a cloud CPU. The input elements to be reproduced can come from the CPU, from the user database in the cloud, or from an active link with an inventory database, any other database, 3D printing, an e-commerce database, a social database and the like. In one embodiment, an accessory or any other article can be added by learning the dynamic movement and the mask model of the object concerned. In addition, the background can be augmented so as to change or create a different environment using the same technique. Once all the required objects have been labeled, the required objects can be masked and the combined mask can be used to change the background. In one embodiment, the rendering module can reproduce the object with an improved object rendering technique, for example by interpolating the object to a higher resolution, smoothing the edges, and bringing the object back to the required resolution with better integration quality in the frame. Additional techniques include acting directly on the edge of the object by averaging the pixel values with a certain weighting factor to better blend the object with the background color.
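One plausible way to realize the layered rendering of module 204 for lipstick is sketched below: the luminance of the original lips (shading, texture, highlights) is kept while only the chroma is replaced by the target lipstick color, and the mask edge is feathered for blending. The LAB split used here is an assumption standing in for the color/shading/brightness layers described above, not the patent's exact layer set.

```python
# Keep the luminance layer of the original lips and replace only the chroma with
# the target lipstick color; feather the mask edge so the result blends in.
import cv2
import numpy as np

def apply_virtual_lipstick(frame_bgr, lip_mask, lipstick_bgr, opacity=0.8):
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    target = np.uint8([[lipstick_bgr]])
    target_lab = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)[0, 0]

    recolored = lab.copy()
    recolored[..., 1] = target_lab[1]        # replace chroma channel a
    recolored[..., 2] = target_lab[2]        # replace chroma channel b (luminance kept)
    recolored_bgr = cv2.cvtColor(recolored.astype(np.uint8), cv2.COLOR_LAB2BGR)

    # feather the mask edge so the lipstick blends into the surrounding skin
    alpha = cv2.GaussianBlur(lip_mask.astype(np.float32) / 255.0, (15, 15), 0) * opacity
    alpha = alpha[..., None]
    out = alpha * recolored_bgr.astype(np.float32) + (1 - alpha) * frame_bgr.astype(np.float32)
    return out.astype(np.uint8)
```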
Figure 3 shows an augmented reality module that can change the appearance of a body part, or the color, orientation and texture of a facial feature or of an article or object in the foreground or background of the image; for example, the module can add hair to the user, change the eye, skin and hair color of the user, change the eye pose and the like. The modules 301, 302, 303 and 306 can operate in a similar manner to the modules 201, 202, 203 and 206 of the general augmented reality module described above. Module 304 may have the ability to calculate or obtain additional information, such as the pose of the head or the direction of movement of the body, directly from the eye adjustment module or from module 307 using a dedicated detector for the movement and 3D orientation of the article, and can use this information to modify the required body part; for example, obtaining the pose of the head makes it possible to correct the orientation of the eyes by modifying the eyes of the mask/model in the required direction. In addition, head detection can be used to add hair in the correct orientation, a hat and the like. For example, in a more complex case, one might wish to represent a shorter length of a given hairstyle. Mask manipulation in module 304 may be necessary to create a shorter mask for the new hairstyle, and the difference between the original mask and the mask after manipulation may constitute a new mask for manipulation. In the new mask, a certain part will be the estimation of the exposed body parts of the user once the hair has been shortened (for example, the shoulders), and a certain part will represent the background which would be newly visible with the shorter hair length. The new mask can be divided into body and background, and the newly reproduced object can take the combination of the predicted background image and shoulders to create a new reproduced image. The result, after reproducing the modified hair length in the video, is a view of the user with a shorter hair length before the actual haircut, which is irreversible, at least for a period of time.
Figure 4 shows calculation methods for creating a model for color and texture/complexion change, which can be referred to as Colograma. This technique is focused on parallel computing, which can support a large number of users or a large number of frames/videos, unlike the super-high-quality color changing techniques which can be found in software programs such as Photoshop. Those processes can be time consuming and may not be practical to perform on a large number of user images or videos. The description of FIG. 4 is just an example, and any derivative of the represented processing flow is part of the present invention.
A challenge in changing a color of an object in a video or image is to accurately identify the relevant pixels of the object. In a video file, speed is a limiting factor for the applicable transformation. In Figure 4, a simplified example of a method for segmenting/extracting an object from a video is shown. The image or video to be modified is received in 401. In 402, the frame of the image or color video is converted into a row vector, which is optional, although the vectorization of the image can considerably decrease processing time. Then, in 403, the effect of brightness is eliminated. There are many techniques to eliminate the effect of brightness. In this example, an energy averaging per pixel in the XYZ color space is used, dividing each pixel by the sum of XYZ.
For example, a 3 x 3 matrix can be used for converting RGB to XYZ, using the chromaticity coordinates of an RGB system (xr, yr), (xg, yg) and (xb, yb) and its reference white (XW, YW, ZW). In parallel, in 404, the selection of the object is carried out by selecting all the points K(x, y) belonging to the object to be transformed. K is the number of objects/areas with a distinct color that can be segmented from the background or from other objects. Then, in 405, each point undergoes the same transformation as that carried out in module 403. In 406, k iterations are carried out to process each pixel and to find the closest color; K > 2 in this technique. For each k, the 2D or 3D Euclidean distance is calculated, and the minimum distance and the corresponding value of k are saved. This can be done on all pixels at once in a relatively quick process:
dist = sqrt((X - xi(k))^2 + (Y - yi(k))^2 + (Z - zi(k))^2)
After k iterations, the labeled image can be obtained. The Euclidean distance "dist" is just an example of a calculation process to distinguish between colors; there are other methods for calculating the distance between colors, for example, a color distance model based on human color perception (hue, saturation and brightness), advanced calibration techniques to adapt the sensitivity and color separation ability to the human eye as in CIE76, CIE94, CIEDE2000 and the like, or any combination with an IR/3D depth camera for histogram stretching, color integration over time, or any other method to improve the sensitivity of color detection (module 411). The application or crossing of the additional information coming from module 411 can take place at the distance comparison level 406, at the very end of the creation of the model 409, or in any combination, depending on the nature of the additional information (deterministic, statistical, time-variable and the like). In addition to the color difference, it is also possible to use other techniques that can add information about the object to improve the decision, such as: area probability (a given pixel must have neighbors or a certain mass of pixels), area characteristics, border filters to isolate the object's border before the final decision is made, depth information (the contour of the depth information should generally match the final contour of the 2D or 3D object), integration over time to determine whether the pixel is in the object area over multiple frames, and the like.
Module 407 is an example of an embodiment of how to distinguish between the required colors and the rest of the color space. In module 407, all pixels with a distance greater than a threshold are set to zero as irrelevant (a pixel with a color different from each of the colors 1 to k), and 1 is assigned to all relevant pixels, therefore generating a binary mask. In 408, a black and white filter can be used to eliminate noise and smooth the shape of the object. Other techniques can be used to improve the decision regarding the pixels belonging to the object. Consequently, the index for all relevant colors starts at 2 and goes up to K + 1. Module 407 is an example in which a specific color or colors are to be separated. Here, all indices can be set to zero, except for the required index. The process is as follows: zeroing all irrelevant indices, obtaining a background and irrelevant color value = 0, and selecting the required color object labeled = 1. If there are several colors in the object, 1 can be assigned to any chosen index from 2 to k + 1 and zero to all the others.
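A minimal sketch of this Colograma-style labeling is shown below, assuming RGB input and a simplified index convention (0 for irrelevant pixels, 1..k for the reference colors, rather than the 2..K+1 indexing described above): the frame is flattened to a vector, brightness is removed by dividing each pixel by its channel sum, and each pixel is assigned to the nearest reference color or rejected by a threshold.

```python
# Sketch of the Colograma-style labeling: flatten the frame to a vector (402),
# normalize out brightness (403), apply the same transform to the reference
# colors (405), keep the minimum color distance per pixel (406) and zero out
# pixels whose distance exceeds the threshold (407).
import numpy as np

def colograma_label(frame_rgb, ref_colors, threshold=0.08):
    h, w, _ = frame_rgb.shape
    pixels = frame_rgb.reshape(-1, 3).astype(np.float64)
    pixels /= pixels.sum(axis=1, keepdims=True) + 1e-6       # remove brightness

    refs = np.asarray(ref_colors, dtype=np.float64)
    refs /= refs.sum(axis=1, keepdims=True) + 1e-6           # same transform

    # k iterations over the reference colors, keeping the minimum distance
    dists = np.stack([np.linalg.norm(pixels - r, axis=1) for r in refs], axis=1)
    labels = dists.argmin(axis=1) + 1                         # indices 1..k
    labels[dists.min(axis=1) > threshold] = 0                 # irrelevant pixels
    return labels.reshape(h, w)

# binary mask for the first reference color, e.g. a sampled lip color:
# mask = (colograma_label(frame, [(200, 60, 80), (40, 40, 40)]) == 1)
```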
In module 409, the black and white mask obtained is applied to the original color image, and the 3D model for color and texture changes is obtained. The model can be a 2D alpha channel in gray scale or a 3D model in the color space. Module 410 can obtain a 2D or 3D model of the object. In the case of video from a single camera, it is possible to obtain a 3D model even if the user moves in front of the camera, for example, while turning in front of the camera. In this case, it is also possible to obtain an object measurement in multiple sections to estimate the user's 3D body or facial curves. A model based only on the color difference is not perfect in terms of quality, so additional information and techniques can be used to improve the quality of the object model (see module 411). Additional techniques such as interpolation and decimation or edge smoothing can be applied after processing via module 410 in order to improve the quality of the model.
FIG. 5 illustrates an example of the digital mirror according to an embodiment of the invention. This embodiment is configured for close-up imaging, for example, for makeup, glasses, etc. The digital mirror 500 comprises a digital display 505, at least one camera 510, and a lighting device 515. As illustrated in the partial cut-away section, the lighting device comprises a light diffuser 520 and a plurality of LEDs of at least two different temperatures. The LEDs are coupled to a controller that controls the intensity of each LED according to the desired light temperature. The adjustment can be defined in accordance with the environmental conditions, skin tone, etc. The display screen is divided into two sections: section 503, which displays the image from the camera after appropriate transformation and translation, and section 504, which is used as a user interface using the touch capability of the display screen 505.
As can be seen in Figure 5, the camera obtains the image from the top of the display screen, so that if the image from the camera were displayed as it is, it would be distorted and would not appear as a mirror image. In addition, depending on the size of the user, the image of the head would appear at different positions on the display screen 505. Therefore, the image from the camera is first transformed according to any one of the embodiments described above. In addition, the image is translated so as to position the user's head in an area 518 designated as the center of the screen. While the image is obtained from the camera, the lighting conditions and complexion can be analyzed by the controller, and the controller can then apply different activation signals to the LEDs of various temperatures so as to provide the appropriate lighting on the user's face. Alternatively, lighting temperature controls may be displayed digitally on the user interface 504 to allow the user to control the temperature and/or intensity of the lighting.
According to one embodiment of the invention, the digital mirror is used to record a makeup session, for example, a makeup demonstration session in a store. During the makeup session, the presenter can use different products and different techniques to apply the makeup. An object of this embodiment is to provide the user with an easily accessible recorded and edited video, so that the user can practice the application technique and is also able to re-order the products used during the demonstration.
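A simplified sketch of the two-temperature LED mix follows; the 2700 K and 6500 K endpoints and the linear blend of intensities are illustrative assumptions (a controller could equally interpolate in mired space or close the loop on the camera image as described above).

```python
# Simplified sketch of the two-temperature LED mix: given a warm and a cool LED
# string, choose relative intensities whose weighted average approximates the
# requested color temperature. The 2700 K / 6500 K values and the linear blend
# are illustrative assumptions, not taken from the patent.
def led_mix(target_kelvin, warm_k=2700, cool_k=6500, total_level=1.0):
    target_kelvin = min(max(target_kelvin, warm_k), cool_k)
    cool_share = (target_kelvin - warm_k) / (cool_k - warm_k)
    warm_duty = total_level * (1.0 - cool_share)
    cool_duty = total_level * cool_share
    return warm_duty, cool_duty   # e.g. feed these to two PWM channels

# led_mix(4000) -> roughly (0.66, 0.34): mostly warm LEDs with some cool fill
```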
In one embodiment, the interface section includes pre-programmed buttons indicating the various stages of the makeup session, for example, lipstick, eyelashes, eyebrows, cheekbones, etc. As the presenter begins each stage, the presenter clicks the appropriate button. Furthermore, the presenter can enter the product used, for example, by selecting it from a menu, scanning a barcode, or simply holding the product in front of the camera. The controller can be programmed to recognize the product from the image, for example, by identifying a barcode, using character recognition to read the label, using image matching against a library of product images, etc.
According to one embodiment, the stored video is then automatically edited by cutting it into sub-sessions. Sub-sessions may include, for example: foundation, powder, self-tanner, concealer, lipstick, lip gloss, mascara, eye shadow, blush, eyebrow shade and eyeliner. This can be done, for example, by identifying the time at which the presenter clicked the respective button. The start of each of these sub-sessions is marked in the stored session video, for example, with metadata, and icons are generated, each icon having a link to a corresponding mark. This will mark each sub-session and allow the user to skip or jump to specific sessions of interest. By "trimming" the video according to the sub-sessions, the client will be able to browse specific sub-sessions quickly and browse the history by sub-session. Sessions and sub-sessions can be reviewed using thumbnails, icons, etc.
In accordance with other features, the make-up artists have the ability to take a snapshot or short recordings of the articles that they used during the session, of the samples they provided to the client, and/or of the items that the client bought. These snapshots can be stored in a separate part of the stored video so as to be displayed in a separate part of the screen, for example, as separate thumbnail pictures. In addition, the system can use optical character recognition, a barcode reader, a QR code reader, classifiers, image matching, RFID tags, or any label reader to identify items and provide textual information about the items. Alternatively, or in addition, an input device allows the makeup artist to add comments regarding the items; in this way, the user can order the items later.
According to another embodiment, the mirror controller is coupled to the cash register and/or to the accounting system. This can be achieved by integrating the mirror with an existing payment system, for example, a point of sale system. In this way, the controller receives information about the items that the customer has actually purchased. In some embodiments, the mirror can be activated in a "co-seller" mode. In this mode, the co-seller can see what the customer has purchased and what samples the customer has received. In some embodiments, when the mirror is in "co-seller" mode, the user (that is, the co-seller) is not able to view the videos. Instead, this mode is intended to help the co-seller follow up with the customer and assist in case the customer asks additional questions. In addition, the name or ID of the co-seller who performed the session or provided the samples can be stored in association with the recorded session and the stored product information, so that each time the customer purchases this item, the co-seller can be credited for the purchase.
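The sub-session marking can be sketched as a small data structure, shown below under assumed names: each press of a stage button appends a timestamped marker, optionally with a scanned product, and the marker list later drives the icons and jump links over the stored video. The class and field names are illustrative, not the patent's own schema.

```python
# Hedged sketch of the sub-session marking: timestamped markers are appended as
# the presenter presses stage buttons, and the marker list is later turned into
# per-sub-session jump links over the stored session video.
import time

class MakeupSessionRecorder:
    def __init__(self, session_id):
        self.session_id = session_id
        self.start = time.time()
        self.markers = []                       # metadata stored alongside the video

    def mark_sub_session(self, label, product=None):
        self.markers.append({
            "label": label,                     # e.g. "lipstick", "eyebrows"
            "offset_s": round(time.time() - self.start, 1),
            "product": product,                 # barcode / SKU if scanned
        })

    def chapter_links(self, video_url):
        # one icon/link per marker, jumping straight to that sub-session
        return [(m["label"], f"{video_url}?t={m['offset_s']}") for m in self.markers]
```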
Other features of these embodiments may include: the option of having a voice recording during the session, which is useful for a tutorial; the ability to fast-forward the recordings; integration with virtual make-up, including in post-processing; integration with virtual glasses and other accessories; a screen of any size, including tablets and phones, or the classic memory mirror; a video that can be stored and edited with or without distortion correction; streaming of sessions to friends, family or Facebook chat rooms, etc. In particular, certain features are presented in this document with respect to a particular embodiment. However, it should be appreciated that each feature can be implemented in combination with any embodiment presented and with any other feature presented, as desired. In one embodiment, the recording is performed with a wireless microphone, a wired microphone, a focused acoustic microphone, etc.
According to other embodiments, virtual makeup from a distance is possible, for example the remote application of makeup by means of mirrors. This embodiment is implemented via a mirror-to-mirror connection. For example, the co-seller is in a store in Paris, while the customer is at home. Both the co-seller and the customer use MemoMini. The co-seller can present the application of makeup remotely to the client and save the session. In this case, the image captured by the mirror camera positioned at the client's location is projected both on the client's screen and on the co-seller's screen. The co-seller then uses a user interface to apply makeup to the image projected on the mirror at the co-seller's location. The input from the co-seller's user interface is used to modify the image projected on the two screens; that is, it modifies the image on the co-seller's screen and is transmitted to the customer's location to modify the image displayed on the client's mirror. In other words, the presenter can use virtualized makeup, as will be described further below.
As explained above, when performing makeup, it is important to provide good lighting in order to obtain good video results. According to one embodiment, the frame of the mirror comprises a plurality of light sources, each providing light at a certain temperature. The lights are controlled by a processor which adjusts the brightness of the light sources to provide the appropriate light temperature. In one embodiment, a sensor detects the lighting environment around the face of the person receiving the demonstration. According to another embodiment, a processor analyzes the image from the camera to determine the lighting environment around the face of the person receiving the demonstration. Using the lighting information, and optionally the skin tone of the person receiving the demonstration, the controller adjusts the light sources to provide lighting at a desired temperature. A feedback loop can be implemented by the controller analyzing the image from the camera and adjusting the lighting temperature until a correct image is obtained. In one embodiment, the light sources are LEDs with different lighting temperatures. For example, LEDs with different temperatures can be interleaved, so that their combined output can be controlled to achieve the desired lighting temperature.
In general, during a makeup session, one must choose from the available products, apply the product, and see whether it looks good. This is a tedious process, even when it is done virtually.
The features presented here provide a different processing flow to arrive at the desired product. In accordance with this processing flow, a user chooses the attributes of a product, independently of any actual product. The user can then manipulate the attributes until the user arrives at the desired attributes. At this point, the system uses the desired attributes to map them to a list of available products and selects the product with the closest match. The product with the closest match can then be presented to the user. In one embodiment, the system will present to the user only the available colors that the user can purchase, and once the user has chosen a color, the system can transform the selection into a product to be placed in the basket.
For a better understanding of these features, an example is provided here using lipstick. A similar process can be used for other makeup products. According to this processing flow, illustrated in FIG. 6, a makeup artist or a user receives palettes for a selection. For this example, the palettes can include, for example, color, color intensity or transparency, color effect (for example, gloss), etc. The makeup artist or user selects attributes from the palettes to apply virtual lipstick with the selected attributes to the image projected on the digital mirror screen. This can be done either by using virtual brush strokes, or by system recognition of the pixels belonging to the lips and application of the attributes to these pixels to generate an image of the lips with the lipstick applied. Another option is to have various models of lip shapes stored in the system. The system identifies the location of the lips in the image and then overlays a selected model with the selected attributes on the image of the lips. Regardless of the method used to virtually color the lips, the system continually tracks the location of the lips in the image and adjusts the coloring as needed to simulate the lipstick applied to the lips, even if the user moves his head so that the location of the lips in the image changes.
While the image is presented with the lipstick, the user can modify the attributes, and the results are virtualized in real time on the display screen. This process continues until it is indicated that the desired attributes have been obtained. At this point, the system "maps" the selected attributes to a product database. This can be done, for example, by creating a correspondence table beforehand, the correspondence table comprising the attributes of the available products. When an appropriate match is obtained, the name or image of the product can be presented to the user. The user can then place this product in a virtual shopping cart.
The general processing flow executed by the digital makeup mirror is shown in Figure 6. In step 600, the system displays the available attribute palettes. In step 602, it is determined whether an attribute has been selected. If so, the attribute is applied to the image on the screen. Otherwise, the system returns to detecting a selection of attributes. Although only one attribute selection is shown in this flowchart, the selection process can be repeated for any number of attributes, for example, color, shine, texture, etc. When the attribute has been selected and applied, in step 606, the system monitors any change in the attribute, for example, a change in lipstick color. In step 608, any detected change is applied to the image on the screen.
In step 610, the system monitors whether a selection has been indicated as completed. If so, the system optionally performs step 612, which consists of storing the image reflecting the final selection of attributes. This step can be skipped, especially when the entire session is recorded as video. Instead of skipping this step, the system can simply insert metadata for the frames showing the final selection of attributes. This allows the user to jump to these images when watching the session. In step 614, the system compares or maps the selected attributes to a list of attributes in a product database to identify the product that best matches the selected attributes (a minimal sketch of this matching step is given after this passage). In step 616, the system presents on the screen the product which best corresponds to the selected attributes. In step 616, the system can also search for data relating to the product, for example, a price, a size, complementary products, etc., and present them to the user on the screen. Optionally, in step 618, the system adds the product to a virtual shopping cart.

According to yet another embodiment, the system generates an electronic file which allows the production of a product which corresponds to the exact attributes selected. Optionally, the system can generate an electronic file that allows the mixing of lipstick colors to produce a lipstick with the exact characteristics selected by the user.

Figure 7 illustrates another embodiment for performing a virtual makeup session. This process virtually applies a complete "look" to the image of the user as projected on the screen. The "look" includes a complete make-up according to a specific style retrieved from a library of previously programmed styles. In step 700, the system identifies the facial characteristics of the image of the user projected on the screen. The characteristics include, for example, head shape, skin tone, lip shape, shape and/or height or prominence of the cheekbones, eye shape (for example, deep-set, slanted, upturned, downturned, hooded, protruding, round, close-set, wide-set, almond-shaped), etc. Using the determined characteristics, in step 702, the system classifies the characteristics. The system can use the classifications of the facial characteristics, according to a pre-programmed list, to generate a unitary facial classification. In step 704, the system searches for a correspondence between the classification of the user and previously stored facial classifications with which a virtual makeup is associated. The system then selects the look with the best match in step 706. The selection of a look made by the system is performed in accordance with a programmed matching of looks and facial classifications. That is, the system is programmed to match the best makeup look with the particular facial classification determined. Alternatively, or in addition, the user can select a look from a list provided by the system. In step 708, the system virtually applies the make-up look to the image of the user projected on the screen. In step 710, if the user selects the presented look, the system retrieves a list of makeup products which can be used to produce this look. Optionally, the system displays the products in step 714 and/or places the products in a virtual shopping cart in step 716.
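The correspondence-table lookup of step 614 is not detailed in the text; the following is a minimal sketch under stated assumptions: each catalog entry stores numeric attributes (an RGB color plus transparency and gloss on a 0–1 scale), and the closest product minimizes a weighted distance to the selected attributes. The catalog entries, attribute names and weights are illustrative assumptions only.

```python
import math

# Hypothetical catalog: each entry maps a product name to its attributes
# (RGB color plus transparency and gloss on a 0-1 scale). Illustrative values only.
CATALOG = {
    "Lipstick A": {"color": (196, 30, 58), "transparency": 0.1, "gloss": 0.8},
    "Lipstick B": {"color": (170, 51, 106), "transparency": 0.3, "gloss": 0.2},
    "Lipstick C": {"color": (220, 20, 60), "transparency": 0.05, "gloss": 0.9},
}

def attribute_distance(selected, product, color_weight=1.0, other_weight=100.0):
    """Weighted distance between the user's selected attributes and a product."""
    dc = math.dist(selected["color"], product["color"])
    dt = abs(selected["transparency"] - product["transparency"])
    dg = abs(selected["gloss"] - product["gloss"])
    return color_weight * dc + other_weight * (dt + dg)

def closest_product(selected):
    """Return the catalog product whose attributes best match the selection (step 614)."""
    return min(CATALOG, key=lambda name: attribute_distance(selected, CATALOG[name]))

# Example: a selection made on the palette is mapped to the nearest product.
choice = {"color": (200, 25, 60), "transparency": 0.1, "gloss": 0.85}
print(closest_product(choice))  # -> "Lipstick A" with these illustrative values
```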
As can be understood from the presentation provided in this document, aspects of the invention include a controller coupled to a digital screen and a digital camera, the controller incorporating a processor and memory and being preprogrammed to perform operations comprising: receiving the video stream from the camera; identifying facial features in the video stream; flipping each frame relative to a vertical axis to exchange the right and left sides of the image and thereby imitate a mirror image; transforming each frame to emulate an image of the user looking directly at the digital camera as if it were placed behind the screen; framing each frame so that the user's face is displayed in the center of the digital screen; and displaying the video stream on the digital screen after the flipping, transformation and framing operations. A minimal sketch of this per-frame pipeline is given below.
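The exact transformation used to emulate a camera behind the screen is not reproduced here; the following is a minimal sketch of the per-frame pipeline (flip, transformation, face-centered crop, display), assuming OpenCV for image handling and a hypothetical `correct_gaze` warp standing in for the disclosed transformation. Face detection uses OpenCV's bundled Haar cascade; everything else is an assumption.

```python
import cv2

# Haar cascade shipped with OpenCV; used here only to locate the face for cropping.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def correct_gaze(frame):
    """Placeholder for the disclosed transformation that makes the user appear
    to look at a camera behind the screen; the real warp is not specified here."""
    return frame

def mirror_frame(frame, out_size=(720, 1280)):
    """One iteration of the pipeline: flip, transform, crop around the face."""
    flipped = cv2.flip(frame, 1)                # horizontal flip -> mirror image
    transformed = correct_gaze(flipped)         # emulate camera behind the screen
    gray = cv2.cvtColor(transformed, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
        cx, cy = x + w // 2, y + h // 2
        half_w, half_h = out_size[1] // 2, out_size[0] // 2
        top = max(0, min(cy - half_h, transformed.shape[0] - out_size[0]))
        left = max(0, min(cx - half_w, transformed.shape[1] - out_size[1]))
        transformed = transformed[top:top + out_size[0], left:left + out_size[1]]
    return transformed

# Usage sketch:
# cap = cv2.VideoCapture(0)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     cv2.imshow("digital mirror", mirror_frame(frame))
#     if cv2.waitKey(1) == 27:   # Esc to quit
#         break
```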
Claims (20)

1. Digital makeup mirror, comprising: a digital screen; a digital camera positioned to generate a video stream of a user's face; a controller coupled to the digital screen and the digital camera and preprogrammed to perform the operations comprising: receiving the video stream from the camera; identifying facial features in the video stream; flipping each frame relative to a vertical axis to replace the right side with the left side of the image to, thereby, imitate a mirror image; transforming each frame to emulate an image of the user looking directly at a camera positioned behind the screen; framing each frame to allow the display of the user's face in the center of the digital screen; and displaying the video stream on the digital screen after the inversion, transformation and framing operations.

2. The digital makeup mirror according to claim 1, wherein the controller further performs the operation comprising identifying facial features in each frame of the video stream so as to track the location of each of the facial features in each frame.

3. The digital makeup mirror according to claim 2, wherein the facial features include one or more of the face, lips, eyes and eyebrows.

4. The digital makeup mirror of claim 3, wherein the facial features further include the chin and the nose.

5. The digital makeup mirror according to claim 1, wherein the identification of facial features includes the identification of the contours of the facial features.

6. The digital makeup mirror according to claim 1, wherein the identification of facial characteristics comprises the identification of the pixels belonging to each facial characteristic.

7. The digital makeup mirror according to claim 1, wherein the controller further performs the operation comprising: displaying on the digital screen a palette of colors corresponding to a makeup category; allowing the user to designate a selection from the color palette; and digitally applying the selection to one of the facial characteristics corresponding to the makeup category.

8. The digital makeup mirror according to claim 7, wherein the digital application of the selection comprises changing the attributes of the pixels belonging to the facial characteristic.

9. The digital makeup mirror of claim 7, wherein the display of the color palette includes displaying a plurality of colors and a plurality of color attributes.

10. The digital makeup mirror according to claim 9, wherein the color attributes include at least transparency and shine.

11. The digital makeup mirror according to claim 10, wherein the controller further performs the operation of allowing the user to modify the attributes after the digital application of the selection.

12. The digital makeup mirror of claim 7, wherein the digital application of the selection further includes tracking the location of each of the facial features in each frame and applying a change in the digital application of the selection in accordance with the movement of the facial characteristic in each frame.
13. The digital makeup mirror of claim 7, wherein the selection is applied digitally using a mask having the shape of a selected facial feature.

14. The digital makeup mirror according to claim 7, further comprising a correspondence table associating the color palette and the product data.

15. The digital makeup mirror according to claim 7, wherein the controller further performs the operation comprising the display of a product corresponding to the designated selection.

16. The digital makeup mirror according to claim 1, wherein the controller further performs the operation comprising: displaying a plurality of pre-programmed makeup looks on the digital screen; allowing the user to designate a selection from the makeup looks; and digitally applying the selection to the face of the user projected on the digital screen.

17. The digital makeup mirror according to claim 16, wherein the controller further performs the operation comprising the display on the screen of images of the products used for generating the makeup look.

18. The digital makeup mirror of claim 1, further comprising a lighting device comprising a plurality of light sources with a plurality of temperatures.

19. The digital makeup mirror according to claim 18, wherein the controller further performs the operation comprising changing the lighting intensity of the plurality of light sources to generate a desired overall light temperature.

20. The digital makeup mirror according to claim 18, wherein the controller further performs the operation comprising: projecting on the digital screen a selection of lighting temperatures; allowing the user to designate a selection from the lighting temperatures; and changing the lighting intensity of the plurality of light sources according to the designation of the user.

[Drawing sheets 1/6–4/6. FIG. 2: reproduced video pipeline (block 205). FIG. 4: flowchart for building an object mask from a video or image to modify — select sampling points K(x, y) in the pixel space of the objects to be modified (401); convert the frame to a column vector of length N = width × height × 3, eliminate the effect of shine, convert RGB to XYZ and divide each pixel by the sum of XYZ (403); apply the same color transformation to each sampled point K (405); optionally use additional information beyond color, such as a depth map (infrared, 3D or RF camera or sensor aligned with the 2D or 3D image), object shape borders, texture and location characteristics, global or local histogram stretching, and previous frames giving the probability that a pixel is part of the object (411); for each pixel, find the distance to every sampled point K, keep the minimum distance and the index of the closest K to form a labeled image, and assign label = 0 to all distances above a threshold; finally, apply the resulting mask to the color image to obtain the 2D/3D grayscale or color mask, cross-checked with the additional information from 411. A minimal sketch of this labeling step follows.]
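The flowchart summarized above describes a color-distance labeling scheme; the following is a minimal sketch under stated assumptions: linear RGB is converted to XYZ and chromaticity-normalized (dividing by the sum of XYZ, which discards intensity and thereby reduces the effect of shine), each pixel is labeled with the index of the nearest sampled reference color, and labels whose distance exceeds a threshold are set to 0. NumPy is assumed; the sample points and threshold are illustrative.

```python
import numpy as np

# sRGB -> XYZ matrix (linear RGB assumed for simplicity in this sketch).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def normalized_xyz(rgb):
    """Convert RGB (..., 3) in [0, 1] to XYZ and divide by the sum of XYZ,
    which discards intensity and thereby reduces the effect of shine."""
    xyz = rgb @ RGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    return xyz / np.maximum(s, 1e-6)

def label_image(frame_rgb, sample_points, threshold=0.05):
    """Label each pixel with 1 + the index of the nearest sampled reference
    color K(x, y); pixels farther than the threshold from every sample get 0."""
    h, w, _ = frame_rgb.shape
    pixels = normalized_xyz(frame_rgb.reshape(-1, 3))             # (N, 3), N = h*w
    refs = normalized_xyz(frame_rgb[tuple(zip(*sample_points))])  # (K, 3)
    dists = np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=-1)  # (N, K)
    labels = dists.argmin(axis=1) + 1
    labels[dists.min(axis=1) > threshold] = 0                     # too far from all samples
    return labels.reshape(h, w)

# Usage sketch: sample a few (row, col) points on the object (e.g. the lips),
# label the frame, and keep the pixels whose label is non-zero as the mask.
# mask = label_image(frame_rgb, sample_points=[(120, 200), (125, 210)]) > 0
```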
Family patents:

Publication number | Publication date
IL264002D0 | 2019-01-31
EP3479351A1 | 2019-05-08
US20180278879A1 | 2018-09-27
FR3053502B1 | 2021-07-30
CA2963108A1 | 2017-12-29
CN109690617A | 2019-04-26
EP3479351A4 | 2020-02-19
RU2019101747A | 2020-07-29
Priority:

Application number | Filing date
US201662356475P | 2016-06-29
US62356475 | 2016-06-29
US201662430311P | 2016-12-05
US62430311 | 2016-12-05